Search results for: multimodal translation
Number of results: 161,872
Abstract: The aim of intercultural translation is to communicate. Communication is enacted through verbal as well as visual means. The interaction of verbal and visual means of communication creates a set of complex situations that demand special attention in translation. One context in which the interaction of visual and verbal elements is of vital importance is children’s picture books. Color is an int...
A stand-in is a common technique for movies and TV programs in foreign languages. Current stand-ins, which substitute only the voice channel, result in awkward mismatches with the mouth motion. Videophones with automatic voice translation are expected to be widely used in the near future, and they may face the same problem without lip-synchronized speaking-face image translation. We introduce a multi-mo...
A modern text is seldom a stand-alone product consisting merely of writing, but a multimodal body of signs from different meaning-making systems. The multimodal context and the new forms of translation, in which the concept of ‘text’ goes beyond language, have become increasingly relevant in contemporary translation research and practice. In order to study multimodal context and intermodal form...
This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures where either global visual features or convolutional feature maps are integrated in order to benefit from visual context. Our final systems ranked first for both En→De and En→Fr language pa...
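The abstract above mentions integrating global visual features or convolutional feature maps into a neural translation system so the decoder can benefit from visual context. A minimal, illustrative sketch of the feature-map variant follows; the dimensions, dot-product attention, and concatenation fusion are assumptions for illustration, not the systems' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: an 8-d decoder state and a flattened 4x4
# convolutional feature map with 8 channels per spatial location.
d = 8
feat_map = rng.standard_normal((16, d))   # one row per spatial region
dec_state = rng.standard_normal(d)        # current decoder hidden state

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Dot-product attention over spatial locations: the decoder weights the
# regions most relevant when predicting the next target word.
scores = feat_map @ dec_state
alpha = softmax(scores)                   # attention weights, sum to 1
visual_ctx = alpha @ feat_map             # weighted sum of region features

# Fuse the visual context with the textual state (here by concatenation)
# before projecting to the target vocabulary.
fused = np.concatenate([dec_state, visual_ctx])
print(fused.shape)  # (16,)
```

The global-feature variant mentioned in the abstract would replace the attention step with a single pooled image vector fed to the decoder directly.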
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation...
We present the results from the second shared task on multimodal machine translation and multilingual image description. Nine teams submitted 19 systems to two tasks. The multimodal translation task, in which the source sentence is supplemented by an image, was extended with a new language (French) and two new test sets. The multilingual image description task was changed such that at test time...
We propose a multimodal neural machine translation (MNMT) method with semantic image regions called region-attentive NMT (RA-NMT). Existing studies on MNMT have mainly focused on employing global visual features or equally sized local grid features extracted by convolutional neural networks (CNNs) to improve performance. However, they neglect the effect of the information captured inside these features. This study utilizes obj...
Multimodal machine translation is the task of translating sentences in a visual context. We decompose this problem into two sub-tasks: learning to translate and learning visually grounded representations. In a multitask learning framework, translations are learned in an attention-based encoder-decoder, and grounded representations are learned through image representation prediction. Our approach...
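The multitask framework described above combines a translation objective with an image representation prediction objective over a shared encoding. A toy sketch of such a combined loss follows; the dimensions, squared-error grounding term, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: a shared 6-d sentence encoding serves both tasks.
enc = rng.standard_normal(6)

# Task 1: translation — negative log-likelihood of the gold token
# under a toy softmax over a 10-word vocabulary.
vocab_logits = rng.standard_normal(10)
gold = 3
log_probs = vocab_logits - np.log(np.exp(vocab_logits).sum())
loss_translate = -log_probs[gold]

# Task 2: grounding — predict the image feature vector from the shared
# encoding and penalise the squared error (a stand-in for the image
# representation prediction objective).
W = rng.standard_normal((6, 6))   # assumed linear prediction head
img_feat = rng.standard_normal(6)
pred = W.T @ enc
loss_ground = np.mean((pred - img_feat) ** 2)

# Multitask objective: weighted sum of the two losses.
lam = 0.5
loss = loss_translate + lam * loss_ground
```

In practice both losses would be backpropagated through the shared encoder so the translation model also learns visually grounded representations.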